Explanation Requirement


A taxonomy of explanations to support Explainability-by-Design

Tsakalakis, Niko, Stalla-Bourdillon, Sophie, Huynh, Trung Dong, Moreau, Luc

arXiv.org Artificial Intelligence

As automated decision-making solutions are increasingly applied to all aspects of everyday life, capabilities to generate meaningful explanations for a variety of stakeholders (e.g., decision-makers, recipients of decisions, auditors, regulators...) become crucial. In this paper, we present a taxonomy of explanations that was developed as part of a holistic 'Explainability-by-Design' approach for the purposes of the project PLEAD. The taxonomy was built with a view to producing explanations for a wide range of requirements stemming from a variety of regulatory frameworks or policies set at the organizational level, either to translate high-level compliance requirements or to meet business needs. The taxonomy comprises nine dimensions. It is used as a stand-alone classifier of explanations conceived as detective controls, in order to support automated compliance strategies. A machine-readable format of the taxonomy is provided in the form of a light ontology, and the benefits of starting the Explainability-by-Design journey with such a taxonomy are demonstrated through a series of examples.
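To give a concrete sense of what a machine-readable, light-ontology encoding of such a taxonomy could look like, here is a minimal sketch using Python's rdflib. Note that this is illustrative only: the abstract states the taxonomy has nine dimensions but does not name them here, so the dimension names, the namespace URI, and the example explanation below are all placeholder assumptions, not PLEAD's actual ontology.

```python
# Minimal sketch: encoding a nine-dimension explanation taxonomy as a light
# ontology. All names below are hypothetical placeholders; the paper does not
# enumerate the nine dimensions in this abstract. Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/plead-taxonomy#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Root class for explanations, conceived as detective controls.
g.add((EX.Explanation, RDF.type, RDFS.Class))

# Nine placeholder dimensions, modelled as properties of an explanation.
for dim in ["Audience", "Timing", "Trigger", "Content", "Format",
            "Channel", "LegalBasis", "Purpose", "Provenance"]:
    prop = EX[f"has{dim}"]
    g.add((prop, RDF.type, RDF.Property))
    g.add((prop, RDFS.domain, EX.Explanation))
    g.add((prop, RDFS.label, Literal(f"{dim} dimension")))

# Classify one concrete (invented) explanation along two of the dimensions.
g.add((EX.LoanDenialExplanation, RDF.type, EX.Explanation))
g.add((EX.LoanDenialExplanation, EX.hasAudience, Literal("decision recipient")))
g.add((EX.LoanDenialExplanation, EX.hasTiming, Literal("post-decision")))

print(g.serialize(format="turtle"))
```

Encoding the taxonomy this way lets downstream compliance tooling query explanations by dimension (e.g., all post-decision explanations aimed at recipients), which is what makes a taxonomy usable as a classifier rather than just documentation.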


Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis

Sovrano, Francesco, Lognoul, Michael, Vilone, Giulia

arXiv.org Artificial Intelligence

Significant investment and development have gone into integrating Artificial Intelligence (AI) in medical and healthcare applications, leading to advanced control systems in medical technology. However, the opacity of AI systems raises concerns about essential characteristics needed in such sensitive applications, like transparency and trustworthiness. Our study addresses these concerns by investigating a process for selecting the most suitable Explainable AI (XAI) methods to comply with the explanation requirements of key EU regulations in the context of smart bioelectronics for medical devices. The adopted methodology starts with categorising smart devices by their control mechanisms (open-loop, closed-loop, and semi-closed-loop systems) and examining their underlying technology. Then, we analyse these regulations to define their explainability requirements for the various devices and related goals. Simultaneously, we classify XAI methods by their explanatory objectives. This allows for matching legal explainability requirements with XAI explanatory goals and determining the suitable XAI algorithms for achieving them. Our findings provide a nuanced understanding of which XAI algorithms align better with EU regulations for different types of medical devices. We demonstrate this through practical case studies on different neural implants, from chronic disease management to advanced prosthetics. This study fills a crucial gap in aligning XAI applications in bioelectronics with the stringent provisions of EU regulations. It provides a practical framework for developers and researchers, ensuring their AI innovations advance healthcare technology and adhere to legal and ethical standards.
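The matching step described above (legal requirements matched to XAI methods via shared explanatory goals) can be pictured as a set-intersection over two classifications. The sketch below assumes this: the device categories come from the abstract, but the specific requirements, goals, and method mappings are illustrative placeholders, not the paper's actual analysis.

```python
# Minimal sketch of matching legal explainability requirements to XAI methods
# via shared explanatory goals. The mappings below are invented for
# illustration; only the three device categories come from the abstract.

# Hypothetical explanatory goals implied by regulation, per control mechanism.
LEGAL_REQUIREMENTS = {
    "open-loop": {"transparency"},
    "semi-closed-loop": {"transparency", "trustworthiness"},
    "closed-loop": {"transparency", "trustworthiness", "causal_understanding"},
}

# Hypothetical XAI methods classified by the explanatory goals they serve.
XAI_METHODS = {
    "feature attribution (e.g. SHAP)": {"transparency"},
    "surrogate models": {"transparency", "trustworthiness"},
    "counterfactual explanations": {"causal_understanding", "trustworthiness"},
}

def suitable_methods(device_type: str) -> dict[str, set[str]]:
    """Return XAI methods whose goals overlap the device's legal requirements."""
    required = LEGAL_REQUIREMENTS[device_type]
    return {
        method: goals & required          # which requirements each method covers
        for method, goals in XAI_METHODS.items()
        if goals & required               # keep only methods serving >= 1 goal
    }

for device in LEGAL_REQUIREMENTS:
    print(device, "->", suitable_methods(device))
```

Framing the problem this way makes the compliance analysis auditable: for any device category, one can list exactly which regulatory goals a chosen XAI method does and does not cover.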


A Methodology and Software Architecture to Support Explainability-by-Design

Huynh, Trung Dong, Tsakalakis, Niko, Helal, Ayah, Stalla-Bourdillon, Sophie, Moreau, Luc

arXiv.org Artificial Intelligence

Algorithms play a crucial role in many technological systems that control or affect various aspects of our lives. As a result, providing explanations for their decisions to address the needs of users and organisations is increasingly expected by laws, regulations, codes of conduct, and the public. However, as laws and regulations do not prescribe how to meet such expectations, organisations are often left to devise their own approaches to explainability, inevitably increasing the cost of compliance and good governance. Hence, we envision Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems. The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation. This paper describes phase (B), a technical workflow to implement explanation capability from requirements elicited by domain experts for a specific application context. Outputs of this phase are a set of configurations, allowing a reusable explanation service to exploit logs provided by the target application to create provenance traces of the application's decisions. The provenance can then be queried to extract relevant data points, which can be used in explanation plans to construct explanations personalised to their consumers. Following the workflow, organisations can design their decision-making systems to produce explanations that meet the specified requirements. To facilitate the process, we present a software architecture with reusable components to incorporate the resulting explanation capability into an application. Finally, we applied the workflow to two application scenarios and measured the associated development costs. The approach was shown to be tractable in terms of development time, which can be as low as two hours per sentence.
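The pipeline in phase (B) has three moving parts: a provenance trace built from application logs, queries that extract relevant data points from it, and explanation plans that turn those data points into consumer-specific sentences. The following sketch illustrates that flow under stated assumptions: the record structure, the query, and the templates are invented for illustration, and the paper's actual service builds on richer (PROV-style) provenance rather than this flat list.

```python
# Minimal sketch of the phase (B) pipeline: provenance trace -> query for
# data points -> explanation plan -> personalised explanation. All structures
# below are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    entity: str      # what the record is about
    attribute: str   # which property was recorded
    value: object    # the recorded value

# A toy provenance trace derived from a decision-making application's logs.
TRACE = [
    ProvenanceRecord("application-42", "decision", "rejected"),
    ProvenanceRecord("application-42", "decisive_factor", "credit history"),
    ProvenanceRecord("application-42", "decided_by", "automated scoring model"),
]

def query(trace, entity, attribute):
    """Extract the data point for (entity, attribute) from the trace."""
    for record in trace:
        if record.entity == entity and record.attribute == attribute:
            return record.value
    return None

# Explanation plans: one template per explanation consumer (hypothetical).
PLANS = {
    "recipient": "Your application was {decision} mainly due to your {factor}.",
    "auditor": "Decision '{decision}' was produced by the {system}; "
               "the decisive factor was {factor}.",
}

def explain(entity, consumer):
    """Fill the consumer's explanation plan with queried data points."""
    return PLANS[consumer].format(
        decision=query(TRACE, entity, "decision"),
        factor=query(TRACE, entity, "decisive_factor"),
        system=query(TRACE, entity, "decided_by"),
    )

print(explain("application-42", "recipient"))
print(explain("application-42", "auditor"))
```

The key design point this illustrates is reuse: the same provenance trace serves every consumer, and personalisation lives entirely in the explanation plans, so adding a new stakeholder means adding a template and configuration, not re-instrumenting the application.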